Fairness constraints can help exact inference in structured prediction

Neural Information Processing Systems

Many inference problems in structured prediction can be modeled as maximizing a score function over a space of labels, where graphs are a natural representation for decomposing the total score into a sum of unary (node) and pairwise (edge) scores. Given a generative model with an undirected connected graph $G$ and a true vector of binary labels $\bar{y}$, it has previously been shown that when $G$ has good expansion properties, such as complete graphs or $d$-regular expanders, one can exactly recover $\bar{y}$ (with high probability and in polynomial time) from a single noisy observation of each edge and node. We analyze the generative model previously studied by Globerson et al. (2015) under a notion of statistical parity. That is, given a fair binary node labeling, we ask whether it is possible to recover the fair assignment, with high probability and in polynomial time, from single edge and node observations. We find that, in contrast to the known trade-offs between fairness and model performance, the addition of the fairness constraint improves the probability of exact recovery. We explain this phenomenon and empirically show how graphs with poor expansion properties, such as grids, become capable of achieving exact recovery. Finally, as a byproduct of our analysis, we provide a minimum-eigenvalue bound that is tighter than the one derived from Weyl's inequality.
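
For concreteness, the following is a minimal simulation sketch of the setting described above; it is not the authors' code. It assumes each edge observation is the product $y_i y_j$ flipped with probability $p$ and each node observation is the label $y_i$ flipped with probability $q$ (the flip rates $p$ and $q$ are illustrative assumptions), models statistical parity as a perfectly balanced labeling, uses uniform rather than log-odds score weights, and substitutes exhaustive search on a toy grid for the polynomial-time recovery the paper actually analyzes.

```python
import itertools
import random

# Toy 2x3 grid graph: grids have poor expansion, the regime where the
# paper shows the fairness constraint helps exact recovery.
rows, cols = 2, 3
n = rows * cols
edges = [(r * cols + c, r * cols + c + 1)
         for r in range(rows) for c in range(cols - 1)]
edges += [(r * cols + c, (r + 1) * cols + c)
          for r in range(rows - 1) for c in range(cols)]

p, q = 0.1, 0.3              # edge / node flip probabilities (illustrative)
rng = random.Random(0)

# A "fair" ground truth under statistical parity: equally many +1/-1 labels.
y_true = [+1] * (n // 2) + [-1] * (n // 2)
rng.shuffle(y_true)

# One noisy observation per edge (of the product y_i * y_j) and per node.
edge_obs = {(i, j): y_true[i] * y_true[j] * (1 if rng.random() > p else -1)
            for (i, j) in edges}
node_obs = [y * (1 if rng.random() > q else -1) for y in y_true]

def score(y):
    """Total score = pairwise agreement with the edge observations
    plus unary agreement with the node observations."""
    pairwise = sum(edge_obs[(i, j)] * y[i] * y[j] for (i, j) in edges)
    unary = sum(node_obs[i] * y[i] for i in range(n))
    return pairwise + unary

all_labelings = list(itertools.product([-1, +1], repeat=n))
fair_labelings = [y for y in all_labelings if sum(y) == 0]  # statistical parity

y_unconstrained = max(all_labelings, key=score)
y_fair = max(fair_labelings, key=score)             # constrained maximizer

print("truth        :", y_true)
print("unconstrained:", list(y_unconstrained))
print("fair         :", list(y_fair))
```

Restricting the maximization to balanced labelings removes many competing labelings from the search space, which is intuitively why the constraint can improve the probability of exact recovery; the paper makes this precise. For reference, the Weyl baseline mentioned in the final sentence of the abstract is the standard eigenvalue bound for symmetric matrices $A$ and $B$:

```latex
% Minimum-eigenvalue bound implied by Weyl's inequality;
% the paper's byproduct is a bound tighter than this in its setting.
\lambda_{\min}(A + B) \;\ge\; \lambda_{\min}(A) + \lambda_{\min}(B)
```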


Review for NeurIPS paper: Fairness constraints can help exact inference in structured prediction

Neural Information Processing Systems

My biggest concern is with the way that $\epsilon_1$ behaves as a function of $n$. First, the choice of the value $-n$ for $\rho$ seems arbitrary (and this choice carries over into the definition of $\epsilon$), which should be discussed more clearly in the text. Second, it is not clear to me why the choice $\rho = -n$ is best. Does it optimize $\epsilon_1$ in some way? Furthermore, as $n$ tends to $\infty$, it seems that $\epsilon_1$ does NOT tend to infinity.


Review for NeurIPS paper: Fairness constraints can help exact inference in structured prediction

Neural Information Processing Systems

This paper is about the analysis of structured output prediction under a 'fairness' constraint. It shows that fairness-related constraints can help to increase accuracy. Fairness is one of the notions whose importance is rising in our community, and this paper gives interesting insights about it. One of the main issues raised by a reviewer pleading for non-acceptance is the "vagueness" of the definition of fairness here. I personally think that this issue should not be weighed too heavily: there is still some "vagueness" in our community as to what the right definition should be.

